A holistic evaluation of the application to be tested can help shape the test plan. One valuable test that offers such a holistic evaluation of the application and its environment is a smoke test. This type of evaluation should continue throughout the product's development life cycle to keep a pulse on the stability of the product. The holistic evaluation may instead call for a regression test to validate that existing functionality has not been broken, or an acceptance test in a production environment. Regardless of the specific test activity used in the evaluation process, iterative and frequent evaluation gives the tester greater familiarity with the product and its development, and gives the software a chance to reveal its overall strengths and weaknesses.
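To make the smoke-test idea concrete, here is a minimal sketch in Python using pytest and the requests library. The base URL and endpoint names are hypothetical placeholders, and the checks are intentionally shallow: the goal is a fast go/no-go signal on overall stability, not deep functional coverage.

```python
# smoke_test.py -- a minimal smoke-test sketch (hypothetical endpoints).
# Each check is shallow on purpose: it asks only whether the major
# pieces of the system are alive, not whether they behave correctly.
import requests

BASE_URL = "https://app.example.com"  # placeholder for the system under test


def test_home_page_is_up():
    # The application responds at all.
    resp = requests.get(f"{BASE_URL}/", timeout=10)
    assert resp.status_code == 200


def test_login_page_loads():
    # A core entry point renders without a server error.
    resp = requests.get(f"{BASE_URL}/login", timeout=10)
    assert resp.status_code == 200


def test_health_endpoint_reports_ok():
    # Back-end dependencies report healthy, assuming the service
    # exposes a /health endpoint (an assumption for this example).
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.json().get("status") == "ok"
```

Run against every build (for example, `pytest smoke_test.py`), a suite like this provides the continuous pulse on stability described above.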
Prior to testing, it is important to know the software's history. What are the known strengths and weaknesses? Which outstanding bug fixes did not make it into this test environment? What are customers saying about the current release? If enhancements need to be tested, what was the customer expecting from them? Did the design specify more than what was originally requested, or perhaps something completely different?
Once the evaluation is done and the history is gathered, a test plan is developed. During test planning, several questions must be asked. What is the goal or objective of the test? Do new test scripts or scenarios need to be written? Should steps be added to existing scripts? Which test types should be used, and at which stage of testing? And importantly, who is on the test team, and how much of their time is allotted to this test plan? Will the test plan offer solid coverage of the entire product, or are only certain functions being tested? What other test teams are reviewing this code base? Will their test plans overlap with yours, and if so, is the overlap necessary? How can all teams ensure there are no gaping holes in the overall development plan for the project or release being tested? (I love the Swiss cheese analogy from Why Hospitals Should Fly: The Ultimate Flight Plan to Patient Safety and Quality Care by John Nance. Since no single test will catch every type of defect, it is important to alternate tests that exercise the product in different ways and from different angles to catch as many defects as possible. It is like looking down at a stack of Swiss cheese: although each slice has holes, you cannot see through the stack from top to bottom because the holes do not line up.)
Testing begins, but how to challenge the software so that it exposes the right defects is the question. If the original test plan is not exposing issues, or not the right issues, it is time to change the plan. Re-evaluation along the way is key. If the environment is changed slightly, how does the software perform? For performance testing, it is important to run the script at different times of day to measure performance under the varying traffic patterns of other users in the system.
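As a rough illustration of that last point, the same scripted request can be issued at a fixed interval across the day, recording the wall-clock time and response time of each run so results can later be compared across traffic patterns. A minimal sketch in Python follows; the target URL, probe cadence, and CSV output are all assumptions made for the example.

```python
# perf_probe.py -- time one scripted request repeatedly across the day.
# The target URL, cadence, and output file are assumptions for
# illustration, not prescribed by any particular tool.
import csv
import time
from datetime import datetime

import requests

TARGET_URL = "https://app.example.com/search?q=widgets"  # placeholder
INTERVAL_SECONDS = 15 * 60  # probe every 15 minutes (assumed cadence)
RUNS = 96                   # 96 probes at 15-minute spacing covers one day

with open("perf_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "elapsed_seconds", "status_code"])
    for _ in range(RUNS):
        start = time.perf_counter()
        resp = requests.get(TARGET_URL, timeout=30)
        elapsed = time.perf_counter() - start
        writer.writerow([datetime.now().isoformat(),
                         f"{elapsed:.3f}", resp.status_code])
        f.flush()  # keep partial results if the probe is interrupted
        time.sleep(INTERVAL_SECONDS)
```

Plotting elapsed time against timestamp then shows how the software behaves as other users' load rises and falls, which is exactly the comparison the varying-traffic question is after.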